Deep domain adaptation model with multi-scale residual attention for incipient fault detection of bearings
MAO Wentao, YANG Chao, LIU Yamin, TIAN Siyu
Journal of Computer Applications    2020, 40 (10): 2890-2898.   DOI: 10.11772/j.issn.1001-9081.2020030329
Aiming at the poor reliability and high false alarm rate of bearing fault detection models caused by differences in working environment and equipment status, a multi-scale attention deep domain adaptation model was proposed according to the characteristics and requirements of incipient fault detection. First, the monitoring signal was pre-processed into three-channel data consisting of the original signal, the Hilbert-Huang transform marginal spectrum and the frequency spectrum. Second, filters of different sizes were added into the residual attention module to extract multi-scale deep features, and a convolution-deconvolution operation was used to reconstruct the input information and obtain attention information; the attention information and the multi-scale features were then combined into a multi-scale residual attention module, which was used to extract attention features with stronger ability to represent incipient faults. Third, a loss function based on cross entropy and Maximum Mean Discrepancy (MMD) regularization constraints was constructed to achieve domain adaptation on the extracted attention features. Finally, a stochastic gradient descent algorithm was used to optimize the network parameters, and an end-to-end incipient fault detection model was established. Comparative experiments were conducted on the IEEE PHM-2012 Data Challenge dataset. The experimental results show that, compared with eight representative incipient fault detection and diagnosis methods as well as transfer learning algorithms, the proposed method reduces the average false alarm rate by 62.7% and 61.3% respectively without delaying the alarm location, and effectively improves the robustness of incipient fault detection.
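The key training signal described above is a cross-entropy term on labelled source data plus an MMD penalty that aligns source and target features. The sketch below illustrates that combination in PyTorch; the single Gaussian kernel, its bandwidth and the lam trade-off weight are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def gaussian_mmd(source_feat, target_feat, bandwidth=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between two
    feature batches under a single Gaussian kernel (bandwidth is a free choice)."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return k(source_feat, source_feat).mean() \
         + k(target_feat, target_feat).mean() \
         - 2 * k(source_feat, target_feat).mean()

def detection_loss(logits, labels, source_feat, target_feat, lam=0.5):
    """Cross entropy on labelled source data plus an MMD penalty that pulls the
    source and target attention features toward a common distribution."""
    return F.cross_entropy(logits, labels) + lam * gaussian_mmd(source_feat, target_feat)
```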
Best and worst coyotes strengthened coyote optimization algorithm and its application to quadratic assignment problem
ZHANG Xinming, WANG Doudou, CHEN Haiyan, MAO Wentao, DOU Zhi, LIU Shangwang
Journal of Computer Applications    2019, 39 (10): 2985-2991.   DOI: 10.11772/j.issn.1001-9081.2019030454
In view of the poor performance of the Coyote Optimization Algorithm (COA), a Best and Worst coyotes strengthened COA (BWCOA) was proposed. Firstly, for the growth of the worst coyote in a group, a global optimal coyote guiding operation was introduced in addition to the group-optimal coyote guidance, improving the social adaptability (local search ability) of the worst coyote. Then, a random perturbation operation was embedded in the growth process of the optimal coyote in the group: random perturbation between coyotes was used to promote the development of the coyotes and give full play to the initiative of each coyote in the group, thereby improving the diversity of the population and enhancing the global search ability, while the growth pattern of the other coyotes remained unchanged. BWCOA was applied to complex function optimization and to the Quadratic Assignment Problem (QAP), using hospital department layout as an example. Experimental results on the CEC-2014 complex functions show that, compared with COA and other state-of-the-art algorithms, BWCOA obtains an average ranking of 1.63 and a rank mean of 1.68 in the Friedman test, both the best results. Experimental results on 6 QAP benchmark sets show that BWCOA obtains the best mean values on 5 of them. These results demonstrate that BWCOA is more competitive.
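To make the two modifications concrete, the following is a minimal numpy sketch of a single pack update: the worst coyote is guided by both the group best and a global best, and the best coyote receives a random perturbation toward another coyote. The update equations, the 0.1 perturbation scale and the function name strengthen_pack are simplified stand-ins for BWCOA's actual growth rules.

```python
import numpy as np

def strengthen_pack(pack, fitness, g_best, bounds, rng=None):
    """One illustrative BWCOA-style step on a single pack.
    pack: (n_coyotes, dim) positions, fitness: (n_coyotes,) costs (lower is better),
    g_best: position of the best coyote over all packs."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    best, worst = np.argmin(fitness), np.argmax(fitness)

    # Worst coyote: guided by both the pack's best and the global best coyote.
    r1, r2 = rng.random(2)
    moved = pack[worst] + r1 * (pack[best] - pack[worst]) + r2 * (g_best - pack[worst])
    pack[worst] = np.clip(moved, lo, hi)

    # Best coyote: random perturbation toward another randomly chosen pack member.
    other = rng.integers(len(pack))
    perturbed = pack[best] + rng.normal(scale=0.1, size=pack.shape[1]) * (pack[other] - pack[best])
    pack[best] = np.clip(perturbed, lo, hi)
    return pack
```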
Smoke recognition based on deep transfer learning
WANG Wenpeng, MAO Wentao, HE Jianliang, DOU Zhi
Journal of Computer Applications    2017, 37 (11): 3176-3181.   DOI: 10.11772/j.issn.1001-9081.2017.11.3176
For the smoke recognition problem, traditional recognition methods based on sensors and image features are easily affected by the external environment, which leads to low recognition precision when the flame scene and type change. Recognition methods based on deep learning require a large amount of data, so their recognition ability is weak when smoke data is missing or the data source is restricted. To overcome these drawbacks, a new smoke recognition method based on deep transfer learning was proposed. The main idea was to transfer smoke features by means of the VGG-16 (Visual Geometry Group) model with the ImageNet dataset as the source data. Firstly, all image data were pre-processed, including random rotation, cropping and flipping. Secondly, the VGG-16 network was introduced to transfer the features of the convolutional layers, which were then connected to fully connected layers pre-trained on smoke data. Finally, the smoke recognition model was obtained. Experiments were conducted on open datasets and real-world smoke images. The experimental results show that the accuracy of the proposed method is higher than that of current smoke image recognition methods, exceeding 96%.
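A minimal sketch of this kind of transfer, assuming a recent torchvision build: the ImageNet-pretrained VGG-16 convolutional layers are frozen and only a replaced classifier head is trained on smoke images. The two-class output and the choice to freeze all convolutional layers are assumptions for illustration, not the paper's exact configuration.

```python
import torch.nn as nn
from torchvision import models

def build_smoke_classifier(num_classes=2):
    """Reuse ImageNet-pretrained VGG-16 convolutional features and replace the
    last fully connected layer so only the classifier head is trained on smoke data."""
    backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in backbone.features.parameters():                 # freeze the transferred conv layers
        p.requires_grad = False
    backbone.classifier[6] = nn.Linear(4096, num_classes)    # new smoke/non-smoke output
    return backbone
```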
Fast flame recognition approach based on local feature filtering
MAO Wentao, WANG Wenpeng, JIANG Mengxue, OUYANG Jun
Journal of Computer Applications    2016, 36 (10): 2907-2911.   DOI: 10.11772/j.issn.1001-9081.2016.10.2907
For the flame recognition problem, traditional recognition methods based on physical signals are easily affected by the external environment. Meanwhile, most current methods based on feature extraction from flame images are less discriminative across different scenes and flame types, and therefore have lower recognition precision when the flame scene and type change. To overcome this drawback, a new fast recognition method for flame images was proposed by introducing color space information into the Scale Invariant Feature Transform (SIFT) algorithm. Firstly, flame feature descriptors were extracted by the SIFT algorithm from frame images obtained from flame videos. Secondly, local noisy feature points were filtered out by introducing flame color space information, and the feature descriptors were transformed into feature vectors by means of Bag Of Keypoints (BOK). Finally, an Extreme Learning Machine (ELM) was utilized to establish a fast flame recognition model. Experiments were conducted on open flame datasets and real-life flame images. The results show that for different flame scenes and types the accuracy of the proposed method is more than 97%, and the recognition time is just 2.19 s for a test set containing 4301 images. In addition, compared with three other methods, namely a support vector machine based on entropy, texture and flame spread rate, a support vector machine based on SIFT and fire specialty in color space, and an ELM based on SIFT and fire specialty in color space, the proposed method performs better in terms of recognition accuracy and speed.
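The sketch below outlines the per-frame feature step using OpenCV's SIFT: keypoints are kept only if the underlying pixel looks flame-colored, and the surviving descriptors are quantized into a bag-of-keypoints histogram against a pre-learned codebook. The simple R>G>B test with the red_margin threshold is a rough stand-in for the paper's color space filter, and the codebook (e.g. from k-means over training descriptors) is assumed to be given.

```python
import cv2
import numpy as np

def flame_bok_vector(image_bgr, codebook, red_margin=40):
    """Bag-of-keypoints vector for one frame: SIFT keypoints survive only if the
    underlying pixel looks flame-coloured, then the kept descriptors are quantised
    against a pre-learned codebook (one row per visual word)."""
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    hist = np.zeros(len(codebook))
    if descriptors is None:
        return hist
    for kp, desc in zip(keypoints, descriptors):
        x, y = map(int, kp.pt)
        b, g, r = image_bgr[y, x].astype(int)
        if r > g + red_margin and g > b:          # crude flame-colour test
            hist[np.argmin(np.linalg.norm(codebook - desc, axis=1))] += 1
    return hist
```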
Hybrid sampling extreme learning machine for sequential imbalanced data
MAO Wentao, WANG Jinwan, HE Ling, YUAN Peiyan
Journal of Computer Applications    2015, 35 (8): 2221-2226.   DOI: 10.11772/j.issn.1001-9081.2015.08.2221

Many traditional machine learning methods tend to produce biased classifiers, which leads to lower classification precision for the minority class in sequential imbalanced data. To improve the classification accuracy of the minority class, a new hybrid sampling online extreme learning machine for sequential imbalanced data was proposed. The algorithm could improve the classification accuracy of the minority class while reducing the loss of classification accuracy of the majority class, and consisted of two stages. In the offline stage, the principal curve was introduced to model the confidence regions of the minority class and the majority class respectively, based on a balanced-sample strategy; over-sampling of the minority class and under-sampling of the majority class were performed within these confidence regions, and the initial model was then established. In the online stage, only the most valuable majority-class samples were chosen according to sample importance, and the network weights were updated dynamically. An upper bound on the information loss of the proposed algorithm was established by theoretical proof. Experiments were conducted on two UCI datasets and a real-world air pollutant forecasting dataset from Macao. The experimental results show that, compared with existing methods such as Online Sequential Extreme Learning Machine (OS-ELM), Extreme Learning Machine (ELM) and Meta-Cognitive Online Sequential Extreme Learning Machine (MCOS-ELM), the proposed method has higher prediction precision and better numerical stability.
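Both stages rest on an online-sequential ELM: an initial least-squares fit on the rebalanced offline batch, then recursive updates on the selected online samples. The numpy sketch below shows only that backbone; the principal-curve-based resampling and the sample-importance selection are omitted, and the class name OSELM, the tanh activation and the regularization value are illustrative assumptions.

```python
import numpy as np

class OSELM:
    """Minimal online-sequential ELM: initial least-squares fit on the rebalanced
    offline batch, then recursive least-squares updates on selected online samples."""
    def __init__(self, n_features, n_hidden, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.W = rng.normal(size=(n_features, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X0, y0, reg=1e-3):
        H = self._hidden(X0)
        self.P = np.linalg.inv(H.T @ H + reg * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y0
        return self

    def update(self, X, y):
        """Recursive update, applied only to the online samples judged valuable."""
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```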

Weighted online sequential extreme learning machine based on imbalanced sample-reconstruction
WANG Jinwan, MAO Wentao, HE Ling, WANG Liyun
Journal of Computer Applications    2015, 35 (6): 1605-1610.   DOI: 10.11772/j.issn.1001-9081.2015.06.1605

Many traditional machine learning methods tend to produce biased classifiers, which leads to low classification precision for the minority class in imbalanced online sequential data. To improve the classification accuracy of the minority class, a new weighted online sequential extreme learning machine based on imbalanced sample-reconstruction was proposed. The algorithm started from exploiting the distribution characteristics of online sequential data and consisted of two stages. In the offline stage, the principal curve was introduced to construct the confidence region, in which over-sampling was performed for the minority class to construct a balanced sample set consistent with the sample distribution trend, and the initial model was then established. In the online stage, a new weighting method was proposed to update the sample weights dynamically, where the weight value was related to the training error. The proposed method was evaluated on UCI datasets and Macao meteorological data. The experimental results show that, compared with existing methods such as Online Sequential Extreme Learning Machine (OS-ELM), Extreme Learning Machine (ELM) and Meta-Cognitive Online Sequential Extreme Learning Machine (MCOS-ELM), the proposed method identifies the minority class better. Moreover, its training time differs little from that of the other methods, which shows that the proposed method can greatly increase the prediction accuracy of the minority class without increasing the complexity of the algorithm.
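The online stage amounts to a weighted recursive least-squares update of the ELM output weights, with per-sample weights supplied by the caller (in the paper, derived from the training error). The sketch below shows one such step under the assumption of a diagonal weight matrix; the weight-assignment rule itself is not reproduced.

```python
import numpy as np

def weighted_rls_update(P, beta, H, y, weights):
    """One weighted recursive least-squares step for the ELM output weights beta:
    H is the hidden-layer output of the new chunk, y its targets, and weights the
    per-sample weights supplied by the caller (larger for larger training error)."""
    W = np.diag(weights)
    S = np.linalg.inv(np.linalg.inv(W) + H @ P @ H.T)   # weighted innovation term
    P_new = P - P @ H.T @ S @ H @ P
    beta_new = beta + P_new @ H.T @ W @ (y - H @ beta)
    return P_new, beta_new
```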

Hyper-spherical multi-task learning algorithm with adaptive grouping
MAO Wentao, WANG Haicheng, LIU Shangwang
Journal of Computer Applications    2014, 34 (7): 2061-2065.   DOI: 10.11772/j.issn.1001-9081.2014.07.2061

To solve the problem that most conventional multi-task learning algorithms evaluate the risk of each task independently and lack a uniform constraint across all tasks, a new hyper-spherical multi-task learning algorithm with adaptive grouping was proposed in this paper. With Extreme Learning Machine (ELM) as the basic framework, the algorithm introduced a hyper-spherical loss function to evaluate the risks of all tasks uniformly, and obtained the decision model via an iteratively reweighted least squares solution. Furthermore, considering the relatedness between tasks, this paper also constructed a regularizer with grouping structure based on the assumption that related tasks have more similar weight vectors, so that the tasks in the same group could be trained independently. Finally, the optimization objective was transformed into a mixed 0-1 programming problem, and a multi-objective method was utilized to identify the optimal grouping structure and obtain the model parameters. The simulation results on toy data and cylindrical vibration signal data show that the proposed algorithm outperforms state-of-the-art methods in terms of generalization performance and the ability to identify the inner structure among tasks.
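The iteratively reweighted least squares step can be illustrated as follows: the residuals of all tasks for each sample are collapsed into a single radius, which then reweights a shared ELM least-squares problem. Treating the hyper-spherical loss as this joint residual norm is an assumption made only for the sketch; the grouping regularizer and the 0-1 programming step are not shown.

```python
import numpy as np

def irls_multitask_elm(H, Y, reg=1e-2, n_iter=20, eps=1e-6):
    """Illustrative iteratively reweighted least squares: the residuals of all tasks
    for each sample are collapsed into one radius, which reweights a shared ELM
    least-squares problem. H: hidden-layer output (n_samples, n_hidden),
    Y: stacked task targets (n_samples, n_tasks)."""
    n_hidden = H.shape[1]
    beta = np.zeros((n_hidden, Y.shape[1]))
    for _ in range(n_iter):
        radius = np.linalg.norm(Y - H @ beta, axis=1)     # joint residual per sample
        w = 1.0 / np.maximum(radius, eps)                 # reweighting factors
        HW = H * w[:, None]
        beta = np.linalg.solve(H.T @ HW + reg * np.eye(n_hidden), HW.T @ Y)
    return beta
```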

Model selection of extreme learning machine based on latent feature space
MAO Wentao, ZHAO Zhongtang, HE Huanhuan
Journal of Computer Applications    2013, 33 (06): 1600-1603.   DOI: 10.3724/SP.J.1087.2013.01600
Recently, Extreme Learning Machine (ELM) has become a promising tool for solving a wide range of classification and regression problems. However, the generalization performance of ELM decreases when there exist redundant hidden neurons. To solve this problem, this paper introduced a new regularizer, namely the Frobenius norm of the mapping matrix from the hidden space to a new latent feature space. Furthermore, an alternating optimization strategy was adopted to solve the above regularization problem and learn the latent feature space. The proposed algorithm was tested empirically on classical UCI datasets as well as a load identification engineering dataset. The experimental results show that the proposed algorithm obviously outperforms the classical ELM in terms of predictive precision and numerical stability, and needs much less computational cost than existing ELM model selection algorithms.
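A rough numpy sketch of the alternating idea, under assumptions not spelled out in the abstract: a matrix A maps the ELM hidden activations to a latent space and carries the Frobenius-norm penalty, the output weights beta are solved in closed form on the latent features, and A is refined by a gradient step. The objective, the step size and the function name alternating_latent_elm are illustrative, not the paper's formulation.

```python
import numpy as np

def alternating_latent_elm(H, y, latent_dim=10, reg=1e-2, lr=1e-3, n_iter=50, rng=None):
    """Rough alternating optimisation: A maps the ELM hidden activations H to a latent
    space and carries the Frobenius-norm penalty, beta maps the latent features to the
    target vector y (n_samples,); beta is solved in closed form, A by a gradient step."""
    rng = np.random.default_rng(0) if rng is None else rng
    A = rng.normal(scale=0.1, size=(H.shape[1], latent_dim))
    for _ in range(n_iter):
        Z = H @ A                                              # latent features
        beta = np.linalg.solve(Z.T @ Z + reg * np.eye(latent_dim), Z.T @ y)
        residual = y - Z @ beta
        grad_A = -2 * H.T @ np.outer(residual, beta) + 2 * reg * A   # gradient of the penalised loss
        A -= lr * grad_A
    return A, beta
```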
Multi-input-multi-output support vector machine based on principal curve
MAO Wentao, ZHAO Shengjie, ZHANG Junna
Journal of Computer Applications    2013, 33 (05): 1281-1293.   DOI: 10.3724/SP.J.1087.2013.01281
To solve the problem that traditional Multi-Input-Multi-Output (MIMO) Support Vector Machine (SVM) methods generally ignore the dependency among outputs, a new MIMO SVM algorithm based on the principal curve was proposed in this paper. Following the assumption that the model parameters of all outputs lie on a manifold, this paper firstly constructed a manifold regularization based on Multi-dimensional Support Vector Regression (M-SVR), where the regularizer was the squared distance from the output parameters to the principal curve passing through the middle of the parameter set. Secondly, considering the non-convexity of this regularization, this paper introduced an alternating optimization method to calculate the model parameters and the principal curve in turn until convergence. Experiments were conducted on simulated data and real-life dynamic load identification data, and the results show that the proposed algorithm performs better than M-SVR and the SVM-based separate modeling method in terms of prediction precision and numerical stability.
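To convey the alternating structure only, the sketch below swaps M-SVR for ridge regression and approximates the principal curve by the first principal component line through the weight vectors of all outputs; each iteration refits that line and then pulls every output's weights toward its projection. Both substitutions and the lam trade-off weight are simplifications for illustration, not the paper's method.

```python
import numpy as np

def mimo_ridge_with_curve_penalty(X, Y, reg=1e-2, lam=0.5, n_iter=20):
    """Alternating sketch: W holds one weight column per output; each iteration fits a
    line through the middle of the weight columns (first principal component, a crude
    stand-in for a principal curve) and then refits every output with an extra pull
    toward its projection on that line. X: (n, d), Y: (n, n_outputs)."""
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ Y)   # plain ridge start
    for _ in range(n_iter):
        centre = W.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(W - centre, full_matrices=False)
        direction = U[:, :1]                                   # principal direction
        proj = centre + direction @ (direction.T @ (W - centre))
        W = np.linalg.solve(X.T @ X + (reg + lam) * np.eye(d), X.T @ Y + lam * proj)
    return W
```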